Computation of the nonnegative canonical tensor decomposition with two accelerated proximal gradient algorithms

Authors

Abstract

Multidimensional signal analysis has become an important part of many processing problems, since it makes it possible to exploit several diversities jointly in order to extract useful information. This paper focuses on the design and development of algorithms for a multidimensional data decomposition called the Canonical Polyadic (CP) tensor decomposition, a powerful tool in a variety of real-world applications due to its uniqueness and the ease of interpretation of its factor matrices. More precisely, the goal is to compute simultaneously all the factor matrices involved in the CP decomposition of a real nonnegative tensor, under nonnegativity constraints. For this purpose, two proximal algorithms are proposed: the Monotone Accelerated Proximal Gradient (M-APG) and the Non-monotone Accelerated Proximal Gradient (Nm-APG) algorithms. These algorithms are implemented via a regularization function together with a simple control strategy capable of efficiently exploiting information from previous iterations. Simulation results demonstrate the better accuracy of the proposed algorithms when compared to other algorithms in the literature.
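To make the setting concrete, the sketch below computes a nonnegative CP decomposition of a small third-order tensor by alternating accelerated projected-gradient updates over the three factor matrices. This is only an illustrative baseline under standard assumptions (the prox of the nonnegativity constraint is elementwise projection, Nesterov extrapolation with the usual momentum sequence), not the paper's M-APG/Nm-APG method; the names `apg_nls` and `khatri_rao` and all parameter choices are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def khatri_rao(B, C):
    """Column-wise Kronecker (Khatri-Rao) product of B (J x R) and C (K x R)."""
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, B.shape[1])

def apg_nls(A, M, X, n_inner=20):
    """Accelerated projected gradient for min_{A >= 0} 0.5 * ||X - A M^T||_F^2."""
    L = np.linalg.norm(M.T @ M, 2)          # Lipschitz constant of the gradient
    Y, A_prev, t = A.copy(), A.copy(), 1.0
    for _ in range(n_inner):
        G = (Y @ M.T - X) @ M               # gradient at the extrapolated point
        A_new = np.maximum(Y - G / L, 0.0)  # prox step = nonnegative projection
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = A_new + ((t - 1.0) / t_new) * (A_new - A_prev)  # Nesterov extrapolation
        A_prev, t = A_new, t_new
    return A_prev

# Synthetic rank-3 nonnegative tensor T of shape (I, J, K)
I, J, K, R = 8, 9, 10, 3
A0, B0, C0 = (rng.random((n, R)) for n in (I, J, K))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)

# Alternate APG sub-solves over the three factor matrices (matricized problems)
A, B, C = (rng.random((n, R)) for n in (I, J, K))
for _ in range(100):
    A = apg_nls(A, khatri_rao(B, C), T.reshape(I, -1))
    B = apg_nls(B, khatri_rao(A, C), np.moveaxis(T, 1, 0).reshape(J, -1))
    C = apg_nls(C, khatri_rao(A, B), np.moveaxis(T, 2, 0).reshape(K, -1))

That = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T - That) / np.linalg.norm(T)
print(f"relative fit error: {rel_err:.2e}")
```

Each sub-problem is the nonnegative least-squares fit of one mode-n unfolding against the Khatri-Rao product of the other two factors, which is why a single matrix routine suffices for all three modes.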


Similar Articles

Alternating proximal gradient method for sparse nonnegative Tucker decomposition

Multi-way data arises in many applications such as electroencephalography classification, face recognition, text mining and hyperspectral data analysis. Tensor decomposition has been commonly used to find the hidden factors and elicit the intrinsic structures of the multi-way data. This paper considers sparse nonnegative Tucker decomposition (NTD), which is to decompose a given tensor into the p...


Algorithms for Nonnegative Tensor Factorization

Nonnegative Matrix Factorization (NMF) is an efficient technique to approximate a large matrix containing only nonnegative elements as a product of two nonnegative matrices of significantly smaller size. The guaranteed nonnegativity of the factors is a distinctive property that other widely used matrix factorization methods do not have. Matrices can also be seen as second-order tensors. For som...


Accelerated Proximal Gradient Methods for Nonconvex Programming

Nonconvex and nonsmooth problems have recently received considerable attention in signal/image processing, statistics and machine learning. However, solving the nonconvex and nonsmooth optimization problems remains a big challenge. Accelerated proximal gradient (APG) is an excellent method for convex programming. However, it is still unknown whether the usual APG can ensure the convergence to a...
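The monotone safeguard that makes APG usable on nonconvex problems can be sketched in a few lines: alongside the extrapolated proximal step, take a plain proximal gradient step from the current iterate and keep whichever has the lower objective, so the function value never increases. This is a generic illustration under our own toy objective and naming, not the exact scheme of any of the papers listed here.

```python
import numpy as np

def monotone_apg(grad, prox, f, x0, L, n_iter=200):
    """Monotone APG sketch: accept the extrapolated proximal step only if it
    beats a safeguard proximal gradient step taken from the current iterate."""
    x, x_prev, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_new) * (x - x_prev)  # Nesterov extrapolation
        z = prox(y - grad(y) / L)                   # step from extrapolated point
        v = prox(x - grad(x) / L)                   # safeguard step from x
        x_prev = x
        x = z if f(z) <= f(v) else v                # keep the better one: monotone
        t = t_new
    return x

# Toy nonconvex smooth term with a nonnegativity constraint (prox = projection)
f = lambda x: np.sum(x**2 - 0.5 * np.sin(x))
grad = lambda x: 2.0 * x - 0.5 * np.cos(x)
prox = lambda x: np.maximum(x, 0.0)
x_star = monotone_apg(grad, prox, f, np.full(4, 3.0), L=2.5)  # L >= sup|f''| = 2.5
```

Because the safeguard step with step size 1/L can never increase the objective, the accepted iterate is monotone even when the extrapolated step overshoots, which is the essence of the "monotone" variants discussed above.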


An Accelerated Proximal Coordinate Gradient Method

We develop an accelerated randomized proximal coordinate gradient (APCG) method, for solving a broad class of composite convex optimization problems. In particular, our method achieves faster linear convergence rates for minimizing strongly convex functions than existing randomized proximal coordinate gradient methods. We show how to apply the APCG method to solve the dual of the regularized em...


Distributed Accelerated Proximal Coordinate Gradient Methods

We develop a general accelerated proximal coordinate descent algorithm in distributed settings (DisAPCG) for the optimization problem that minimizes the sum of two convex functions: the first part f is smooth with a gradient oracle, and the other one Ψ is separable with respect to blocks of coordinates and has a simple known structure (e.g., L1 norm). Our algorithm gets new accelerated convergen...



Journal

Journal: Digital Signal Processing

Year: 2022

ISSN: 1051-2004, 1095-4333

DOI: https://doi.org/10.1016/j.dsp.2022.103682